Both clustering and outlier detection play an important role in the analysis of meteorological measurements. We present AWT, a clustering algorithm for time series data that also performs implicit outlier detection during clustering. AWT integrates ideas from several well-known K-Means clustering algorithms. It chooses the number of clusters automatically based on a user-defined threshold parameter, and it can be used for heterogeneous meteorological input data as well as for data sets that exceed the available memory. We apply AWT to crowdsourced 2-m temperature data with hourly resolution from the city of Vienna to detect outliers and to investigate whether the final clusters show general similarities and similarities with urban land-use characteristics. We show that both the outlier detection and the implicit mapping to land-use characteristics are possible with AWT, which opens up new fields of application, specifically in the rapidly evolving field of urban climate and urban weather.
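The abstract does not spell out the clustering rule, so the following is only a minimal sketch of how threshold-driven clustering with implicit outlier flagging can look; the function name, the `threshold` parameter, and the `min_cluster_size` rule are illustrative assumptions, not the AWT algorithm itself.

```python
import numpy as np

def threshold_clustering(series, threshold, min_cluster_size=3):
    """Leader-style clustering: open a new cluster whenever a series is
    farther than `threshold` from every existing centroid; series that
    end up in very small clusters are flagged as outliers."""
    centroids, members = [], []
    for i, x in enumerate(series):
        if centroids:
            dists = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(dists))
        if not centroids or dists[j] > threshold:
            centroids.append(x.astype(float).copy())
            members.append([i])
        else:
            members[j].append(i)
            centroids[j] += (x - centroids[j]) / len(members[j])  # running mean update
    outliers = [i for m in members if len(m) < min_cluster_size for i in m]
    return centroids, members, outliers

# toy usage: 20 "hourly temperature" curves of length 24, one obvious outlier
rng = np.random.default_rng(0)
curves = rng.normal(15.0, 1.0, size=(20, 24))
curves[0] += 10.0
cents, membs, outs = threshold_clustering(curves, threshold=15.0)
print(len(cents), "clusters, outlier indices:", outs)
```

In this toy, the threshold plays the role of the single user-defined parameter that controls how many clusters emerge, and series stranded in tiny clusters are the implicitly detected outliers.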
Multi-view data containing complementary and consensus information can facilitate representation learning by exploiting the intact integration of multi-view features. Because most objects in the real world have underlying connections, organizing multi-view data as heterogeneous graphs is beneficial for extracting latent information among different objects. Owing to their powerful capability to gather information from neighboring nodes, we apply Graph Convolutional Networks (GCNs) in this paper to cope with heterogeneous-graph data originating from multi-view data, which is still under-explored in the GCN literature. In order to improve the quality of the network topology and alleviate the interference of noise introduced by graph fusion, some methods undertake sorting operations before the graph convolution procedure. These GCN-based methods generally sort and select the most confident neighborhood nodes for each vertex, for example by picking the top-k nodes according to pre-defined confidence values. Nonetheless, this is problematic due to the non-differentiable sorting operators and inflexible graph embedding learning, which may result in blocked gradient computations and undesired performance. To cope with these issues, we propose a joint framework dubbed Multi-view Graph Convolutional Network with Differentiable Node Selection (MGCN-DNS), which consists of an adaptive graph fusion layer, a graph learning module and a differentiable node selection scheme. MGCN-DNS accepts multi-channel graph-structural data as input and aims to learn a more robust graph fusion through a differentiable neural network. The effectiveness of the proposed method is verified by rigorous comparisons with a considerable number of state-of-the-art approaches on multi-view semi-supervised classification tasks.
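The abstract only states that hard top-k selection blocks gradients; one common way to make such a selection differentiable is to replace the 0/1 mask produced by sorting with a soft mask built from repeated temperature-scaled softmaxes. The sketch below illustrates this generic relaxation in NumPy; it is an assumption for illustration, not the node selection layer used in MGCN-DNS.

```python
import numpy as np

def soft_topk_mask(scores, k, temperature=0.05):
    """Soft relaxation of selecting the k most confident neighbours:
    run k rounds of a temperature-scaled softmax, suppressing mass that
    has already been selected, and accumulate the result into a soft
    mask.  Gradients can flow through `scores`, unlike a hard sort."""
    scores = np.asarray(scores, dtype=float)
    mask = np.zeros_like(scores)
    suppress = np.zeros_like(scores)          # log-domain suppression term
    for _ in range(k):
        logits = scores / temperature + suppress
        p = np.exp(logits - logits.max())
        p /= p.sum()
        mask += p
        suppress += np.log(1.0 - p + 1e-9)    # discourage re-selecting a node
    return np.clip(mask, 0.0, 1.0)

confidences = [0.9, 0.1, 0.8, 0.05, 0.7]      # per-neighbour confidence values
print(soft_topk_mask(confidences, k=2))       # most mass goes to the two most confident nodes
```

In a GCN layer, such a soft mask would be multiplied onto each vertex's adjacency row before message passing, so that node selection and graph embedding can be learned jointly.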
Which joint interactions in the human gait cycle can be used as biometric characteristics? Most approaches to gait recognition lack interpretability. We propose an interpretable feature representation of gait sequences based on graphical Granger causal inference. A person's gait sequence in a standardized motion capture format, consisting of a set of 3D joint trajectories in space, is envisaged as a causal system of joints interacting in time. We apply the graphical Granger model (GGM) to obtain so-called Granger causal graphs among the joints as a discriminative and visually interpretable representation of a person's gait. We evaluate eleven distance functions in the GGM feature space using established classification and clustering evaluation metrics. Our experiments indicate that, depending on the metric, the most suitable distance functions for the GGM are the total norm distance and the Ky Fan 1-norm distance. The experiments also show that the GGM is able to detect the most discriminative joint interactions and that it outperforms five related interpretable models in terms of correct classification rate and Davies-Bouldin index. The proposed GGM model can serve as a complementary tool for gait analysis in kinesiology or for gait recognition in video surveillance.
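As a rough intuition for how a Granger-causal edge between two joint trajectories can be scored, the sketch below compares the residual error of predicting one signal from its own past with the error obtained when the lagged other signal is added. It is a pairwise toy under assumed lag and variable names; the paper's graphical Granger model fits all joints jointly, which this sketch does not attempt.

```python
import numpy as np

def granger_score(x, y, lag=3):
    """Does the past of x help predict y?  Compare the residual sum of
    squares of an autoregressive model on y alone with the model that
    also includes lagged values of x (ordinary least squares)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    other = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    ones = np.ones((n - lag, 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        r = Y - design @ beta
        return float(r @ r)

    rss_restricted = rss(np.hstack([ones, own]))
    rss_full = rss(np.hstack([ones, own, other]))
    return np.log(rss_restricted / max(rss_full, 1e-12))  # > 0: x Granger-helps y

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.roll(x, 2) + 0.1 * rng.normal(size=500)  # y follows x with lag 2
print(granger_score(x, y), granger_score(y, x))
```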
Hawkes processes are a special class of temporal point processes that exhibit a natural notion of causality, since the occurrence of past events may increase the probability of future events. Discovering the underlying network of influence between the dimensions of a multidimensional temporal process is of great importance in disciplines where high-frequency data are modeled, for example in financial or seismological data. This paper addresses the problem of learning Granger-causal networks in multidimensional Hawkes processes. We formulate this problem as a model selection task in which we follow the minimum description length (MDL) principle. Moreover, we propose a general algorithm for MDL-based inference using a Monte Carlo method and use it for the causal discovery problem. We compare our algorithm with state-of-the-art baseline methods on synthetic and real-world financial data. The synthetic experiments demonstrate the advantage of our method in graph discovery over the baseline methods with respect to the data size. The experimental results on G-7 bond price data are consistent with expert knowledge.
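For readers unfamiliar with Hawkes processes, the sketch below evaluates the log-likelihood of a one-dimensional Hawkes process with an exponential excitation kernel, the basic building block behind such causal network models. The constants and function name are illustrative; the paper's multivariate, MDL-penalized Monte Carlo procedure is not reproduced here.

```python
import numpy as np

def hawkes_loglik(events, T, mu, alpha, beta):
    """Log-likelihood of a 1-D Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    using the standard recursion for the excitation term."""
    events = np.asarray(events, dtype=float)
    loglik, decay, prev = 0.0, 0.0, None
    for t in events:
        if prev is not None:
            decay = (decay + alpha) * np.exp(-beta * (t - prev))
        loglik += np.log(mu + decay)   # intensity just before event t
        prev = t
    # compensator: integral of the intensity over [0, T]
    compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - events)))
    return loglik - compensator

events = [0.5, 0.9, 1.1, 3.0, 3.2, 7.5]
print(hawkes_loglik(events, T=10.0, mu=0.3, alpha=0.8, beta=1.5))
```

An MDL-style model selection would add a description-length penalty for each excitation edge kept in the multivariate version and keep only the edges that pay for themselves in likelihood.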
Choosing an appropriate programming paradigm for high-performance computing on low-power devices can be useful for speeding up computations. Many Android devices have an integrated GPU, and, although it is not officially supported, the OpenCL framework can be used on Android devices to address these GPUs. OpenCL supports thread and data parallelism. Applications that use the GPU must take into account that they can be suspended by the user or by the Android operating system at any moment. We have created a wrapper library that allows the use of OpenCL on Android devices. Already written OpenCL programs can be executed with almost no modification. We use this library to compare the performance of the DBSCAN and k-means algorithms on the integrated GPU of an ARM-v7 tablet with other single- and multi-threaded implementations on the same device. We investigate which programming paradigms and languages allow the best trade-off between execution speed and energy consumption. Using the GPU for HPC on Android devices can help carry out computationally intensive machine learning or data mining tasks in remote areas, under harsh environmental conditions, and in regions where energy supply is an issue.
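The abstract does not show the wrapper's API, so the following only sketches the generic OpenCL host/kernel workflow that such a wrapper has to expose, written with desktop pyopencl for brevity. The kernel name and buffer sizes are illustrative assumptions; this is not the Android library described in the paper.

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# a trivially data-parallel kernel: one work-item per array element
prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)
prg.square(queue, a.shape, None, a_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a * a))
```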
The increased deployment of photovoltaic (PV) plants requires automatic detection of faulty PV modules in modalities such as infrared (IR) images. Recently, deep learning has become popular for this purpose. However, related works typically sample training and test data from the same distribution, ignoring the presence of domain shift between data of different PV plants. Instead, we frame fault detection as a more realistic unsupervised domain adaptation problem, where we train on labelled data of one source PV plant and make predictions on another target plant. We train a ResNet-34 convolutional neural network with a supervised contrastive loss, on top of which we employ a k-nearest neighbor classifier to detect anomalies. Our method achieves a satisfactory area under the receiver operating characteristic (AUROC) of 73.3 % to 96.6 % on nine combinations of source and target datasets, in which 8.5 % of the images are anomalous. In some cases it even outperforms a binary cross-entropy classifier. With a fixed decision threshold, this results in 79.4 % and 77.1 % correctly classified normal and anomalous images, respectively. Most misclassified anomalies are of low severity, such as hot diodes and small hotspots. Our method is insensitive to hyperparameter settings, converges quickly and reliably detects unknown types of anomalies, making it well suited for practice. Possible uses are in automatic PV plant inspection systems or to streamline the manual labelling of IR datasets by filtering out normal images. Furthermore, our work provides a more realistic view on PV module fault detection using unsupervised domain adaptation, towards developing more performant methods with favorable generalization capabilities.
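As an illustration of the final scoring step, the sketch below computes a k-nearest-neighbour anomaly score on pre-computed embeddings: the score is simply the mean distance to the k closest normal training embeddings. This is a simplification of the paper's supervised k-NN classifier, and all array shapes and thresholds are made up for the example.

```python
import numpy as np

def knn_anomaly_scores(train_emb, test_emb, k=10):
    """Anomaly score = mean Euclidean distance from each test embedding
    to its k nearest training embeddings of normal modules."""
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    nearest = np.sort(d, axis=1)[:, :k]
    return nearest.mean(axis=1)

rng = np.random.default_rng(0)
normal_train = rng.normal(0, 1, size=(500, 128))   # embeddings of normal modules
normal_test = rng.normal(0, 1, size=(20, 128))
anomalous_test = rng.normal(3, 1, size=(5, 128))   # shifted cluster = faulty modules
scores = knn_anomaly_scores(normal_train, np.vstack([normal_test, anomalous_test]))
threshold = np.percentile(scores[:20], 95)         # decision threshold from normal scores
print((scores > threshold).astype(int))            # last five entries should be flagged
```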
We introduce organism networks, which function like a single neural network but are composed of several neural particle networks; while each particle network fulfils the role of a single weight application within the organism network, it is also trained to self-replicate its own weights. As organism networks feature vastly more parameters than simpler architectures, we perform our initial experiments on an arithmetic task as well as on simplified MNIST classification as a collective. We observe that individual particle networks tend to specialise in one of the two tasks and that those fully specialised in the secondary task can be dropped from the network without hindering the computational accuracy of the primary task. This leads to the discovery of a novel pruning strategy for sparse neural networks.
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. We show that, even when there is no immediate relationship between subsequent data points (e.g., when the data points are generated at random), RNNs are still able to remember a few data points back into the sequence by memorizing them outright using standard backpropagation. However, we also show that for classical RNNs as well as LSTM and GRU networks, the distance between recurrent calls over which data points can be reproduced this way is highly limited (compared to even a loose connection between data points) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit (far below the information-theoretic one) on the distance between related data points within which RNNs are still able to recognize said relation.
Overfitting is a problem in Convolutional Neural Networks (CNNs) that causes poor generalization of models on unseen data. To remedy this problem, many new and diverse data augmentation (DA) methods have been proposed to supplement or generate more training data and thereby increase its quality. In this work, we propose a new data augmentation algorithm: VoronoiPatches (VP). We primarily utilize non-linear recombination of information within an image, fragmenting and occluding small information patches. Unlike other DA methods, VP uses small convex polygon-shaped patches in a random layout to transport information around within an image. Sudden transitions created between patches and the original image can, optionally, be smoothed. In our experiments, VP outperformed current DA methods with regard to model variance and overfitting tendencies. We demonstrate that data augmentation utilizing non-linear recombination of information within images, as well as non-orthogonal shapes and structures, improves CNN model robustness on unseen data.
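The description above translates into a fairly small augmentation routine; the sketch below is a simplified rendition that partitions an image into Voronoi cells around random sites and copies a few cells to new positions. The parameter names (`n_sites`, `n_moved`) and the omission of the optional boundary smoothing are simplifying assumptions, not the published VP implementation.

```python
import numpy as np

def voronoi_patch_augment(img, n_sites=40, n_moved=6, rng=None):
    """Split the image into Voronoi cells around random sites and copy a
    few randomly chosen cells to new random positions, occluding what is
    underneath."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    sites = np.column_stack([rng.integers(0, h, n_sites),
                             rng.integers(0, w, n_sites)])
    yy, xx = np.mgrid[0:h, 0:w]
    # nearest-site label for every pixel (squared Euclidean distance)
    d2 = (yy[..., None] - sites[:, 0]) ** 2 + (xx[..., None] - sites[:, 1]) ** 2
    labels = d2.argmin(axis=-1)

    out = img.copy()
    for cell in rng.choice(n_sites, size=n_moved, replace=False):
        ys, xs = np.nonzero(labels == cell)
        if ys.size == 0:
            continue
        dy = rng.integers(-h // 4, h // 4 + 1)
        dx = rng.integers(-w // 4, w // 4 + 1)
        ty, tx = np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)
        out[ty, tx] = img[ys, xs]          # transport the patch content
    return out

img = np.random.rand(64, 64, 3)
print(voronoi_patch_augment(img).shape)
```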
Mechanistic cardiac electrophysiology models allow for personalized simulations of the electrical activity in the heart and the ensuing electrocardiogram (ECG) on the body surface. As such, synthetic signals possess known ground truth labels of the underlying disease and can be employed for validation of machine learning ECG analysis tools in addition to clinical signals. Recently, synthetic ECGs were used to enrich sparse clinical data or even to replace them completely during training, leading to improved performance on real-world clinical test data. We therefore generated a novel synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations, equally distributed into healthy controls and 7 pathology classes. The pathological case of myocardial infarction has 6 sub-classes. A comparison of extracted features between the virtual cohort and a publicly available clinical ECG database demonstrated that the synthetic signals represent clinical ECGs of healthy and pathological subpopulations with high fidelity. The ECG database is split into training, validation, and test folds for the development and objective assessment of novel machine learning algorithms.